
    Unsupervised Continual Learning From Synthetic Data Generated with Agent-Based Modeling and Simulation: A preliminary experimentation

    Continual Learning enables a model to learn a variable number of tasks sequentially without forgetting the knowledge acquired in the past. Catastrophic forgetting typically occurs in neural networks when different tasks are learned in sequence, because performance on the previous tasks drops significantly. One way to mitigate this problem is to provide the model with a subset of the previous examples while it learns a new task. In this paper we evaluate the continual learning performance of an unsupervised model for anomaly detection, generating synthetic data with an agent-based modeling and simulation (ABMS) technique. We simulated the movement of different types of individuals in a building and evaluated their trajectories depending on their role. We collected training and test sets based on these trajectories, and we included in the test set negative examples containing wrong trajectories. We applied replay-based continual learning to teach the model how to distinguish anomalous trajectories depending on the users' roles. The results show that, with ABMS synthetic data, a small percentage of replayed synthetic data is enough to mitigate catastrophic forgetting and to achieve satisfactory accuracy on the final binary classification (anomalous / non-anomalous).
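
    The replay mechanism described above can be sketched in a few lines: keep a small buffer of synthetic examples from earlier tasks and mix a small percentage of them into every training batch of the new task. This is only an illustrative sketch; `ReplayBuffer`, `train_step`, and the batch size are assumptions, not the authors' code.

```python
import random

class ReplayBuffer:
    """Keeps a bounded random sample of past examples (reservoir sampling)."""
    def __init__(self, capacity):
        self.capacity, self.samples, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = example

    def sample(self, k):
        return random.sample(self.samples, min(k, len(self.samples)))

def train_with_replay(train_step, tasks, replay_fraction=0.05, capacity=1000):
    """Train sequentially on `tasks` (each a list of examples), mixing a small
    fraction of replayed examples from earlier tasks into every update to
    mitigate catastrophic forgetting. `train_step` is any callable that
    performs one model update on a list of examples."""
    buffer = ReplayBuffer(capacity)
    for task in tasks:
        n_replay = max(1, int(replay_fraction * len(task)))
        for start in range(0, len(task), 32):             # plain mini-batching
            train_step(task[start:start + 32] + buffer.sample(n_replay))
        for example in task:                               # store this task for future replay
            buffer.add(example)
```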

    High-Performance Computing and ABMS for High-Resolution COVID-19 Spreading Simulation

    This paper presents an approach for modeling and simulating the spreading of COVID-19 based on agent-based modeling and simulation (ABMS). Our goal is not only to support large-scale simulations but also to increase the simulation resolution. Moreover, we do not assume an underlying network of contacts, and the person-to-person contacts responsible for the spreading are modeled as a function of the geographical distance among the individuals. In particular, we defined a commuting mechanism combining radiation-based and gravity-based models, and we exploited the commuting properties at different resolution levels (municipalities and provinces). Finally, we exploited high-performance computing (HPC) facilities to simulate millions of concurrent agents, each mapping an individual's behavior. To run such simulations, we developed a spreading simulator and validated it by simulating the spreading in two of the most populated Italian regions: Lombardy and Emilia-Romagna. Our main achievement is the effective modeling of 10 million concurrent agents, each one mapping an individual's behavior with high resolution in terms of social contacts, mobility, and contribution to the virus spreading. Moreover, we analyzed the forecasting ability of our framework to predict the number of infections when initialized with only a few days of real data. We validated our model against the statistical data coming from the serological analysis conducted in Lombardy; our model makes a smaller error than other state-of-the-art models, with a final root mean squared error (RMSE) of 56,009 when simulating the entire first pandemic wave in spring 2020. For the Emilia-Romagna region, we simulated the second pandemic wave during autumn 2020 and reached a final RMSE of 10,730.11.
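
    The abstract combines radiation-based and gravity-based commuting models but does not say how; the sketch below shows only the two standard building blocks, assuming `pop` (population per location), `dist` (pairwise distance matrix), and `commuters` (outgoing commuters per location) as inputs.

```python
import numpy as np

def gravity_flows(pop, dist, beta=2.0):
    """Gravity model: the flow between i and j grows with the product of their
    populations and decays with distance as dist**-beta."""
    d = np.where(dist > 0, dist, np.inf)                  # suppress self-flows on the diagonal
    return np.outer(pop, pop) / d ** beta

def radiation_flows(pop, dist, commuters):
    """Radiation model: s is the population living closer to i than j
    (excluding both); no tunable distance exponent is needed."""
    n = len(pop)
    flows = np.zeros((n, n))
    for i in range(n):
        s = 0.0
        for j in np.argsort(dist[i]):                     # visit destinations from nearest to farthest
            if j == i:
                continue
            flows[i, j] = commuters[i] * pop[i] * pop[j] / ((pop[i] + s) * (pop[i] + pop[j] + s))
            s += pop[j]
    return flows
```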

    Cross-Diffusion Effects on Stationary Pattern Formation in the FitzHugh-Nagumo Model

    We investigate the formation of stationary patterns in the FitzHugh-Nagumo reaction-diffusion system with linear cross-diffusion terms. We focus our analysis on the effects of cross-diffusion on the Turing mechanism. Linear stability analysis indicates that positive values of the inhibitor cross-diffusion enlarge the region in parameter space where a Turing instability is excited. A sufficiently large cross-diffusion coefficient of the inhibitor removes the requirement, imposed by the classical Turing mechanism, that the inhibitor must diffuse faster than the activator. In an extended region of the parameter space a new phenomenon occurs, namely the existence of a double bifurcation threshold of the inhibitor/activator diffusivity ratio for the onset of patterning instabilities: for large values of the inhibitor/activator diffusivity ratio, classical Turing patterns emerge in which the two species are in phase, while, for small values of the diffusion ratio, the analysis predicts the formation of out-of-phase spatial structures (named cross-Turing patterns). In addition, for increasingly large values of the inhibitor cross-diffusion, the upper and lower bifurcation thresholds merge, so that the instability develops independently of the value of the diffusion ratio, whose magnitude selects Turing or cross-Turing patterns. Finally, the pattern selection problem is addressed through a weakly nonlinear analysis.
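
    The linear stability analysis can be reproduced numerically: with a Jacobian J of the FitzHugh-Nagumo kinetics at a homogeneous steady state and a diffusion matrix D whose off-diagonal entries are the linear cross-diffusion coefficients, an instability at a finite wavenumber appears when the growth rate of some spatial mode k > 0 becomes positive. The numbers below are illustrative placeholders, not the parameter set used in the paper.

```python
import numpy as np

def dispersion_relation(J, D, k_values):
    """Growth rate of spatial mode k for the linearized system u_t = J u + D lap(u):
    the largest real part of the eigenvalues of J - k^2 D. A Turing (or cross-Turing)
    instability corresponds to a positive growth rate at some k > 0 while the
    eigenvalues of J itself have negative real parts."""
    return np.array([np.linalg.eigvals(J - (k ** 2) * D).real.max() for k in k_values])

# Illustrative values only:
J = np.array([[1.0, -2.0],     # Jacobian of the reaction kinetics at the steady state
              [1.5, -2.0]])    # (stable without diffusion: negative trace, positive determinant)
D = np.array([[1.0, 0.5],      # diagonal: self-diffusion; off-diagonal: cross-diffusion terms
              [0.3, 8.0]])
k = np.linspace(0.0, 3.0, 300)
turing_unstable = (dispersion_relation(J, D, k[1:]) > 0).any()
```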

    Lazy Network: A Word Embedding-Based Temporal Financial Network to Avoid Economic Shocks in Asset Pricing Models

    Public companies in the US stock market must annually report their activities and financial performance to the SEC by filing the so-called 10-K form. Recent studies have demonstrated that changes in the textual content of the corporate annual filing (10-K) can convey strong signals of companies' future returns. In this study, we combine natural language processing techniques and network science to introduce a novel 10-K-based network, named the Lazy Network, that leverages year-on-year changes in companies' 10-Ks detected using a neural network embedding model. The Lazy Network aims to capture textual changes derived from financial or economic changes on the equity market. Leveraging the Lazy Network, we present a novel investment strategy that attempts to select the least disrupted and most stable companies by capturing the peripheries of the Lazy Network. We show that this strategy earns statistically significant risk-adjusted excess returns. Specifically, the proposed portfolios yield up to 95 basis points in monthly five-factor alphas (over 12% annually), outperforming similar strategies in the literature.
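
    The abstract describes measuring year-on-year changes in each company's 10-K with a neural embedding model and then selecting the least disrupted firms at the network's periphery. The exact network construction is not given here, so the sketch below covers only the change measure and a naive selection of the least-changed companies; `changes`, a mapping from tickers to their latest year-on-year distance, is an assumed input.

```python
import numpy as np

def yearly_change(emb_prev, emb_curr):
    """Cosine distance between a company's 10-K embeddings in two consecutive
    years: larger values signal stronger textual (and, by hypothesis, economic) change."""
    a, b = np.asarray(emb_prev, float), np.asarray(emb_curr, float)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def least_disrupted(changes, k=50):
    """Rank companies by smallest year-on-year change; these are the candidates
    that the Lazy-Network strategy looks for at the periphery of the network."""
    return sorted(changes, key=changes.get)[:k]
```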

    Anomaly detection in laser-guided vehicles' batteries: a case study

    Detecting anomalous data within time series is a very relevant task in pattern recognition and machine learning, with many possible applications ranging from disease prevention in medicine (e.g., detecting early alterations of the health status before they can clearly be defined as "illness") to the monitoring of industrial plants. Regarding the latter application, detecting anomalies in an industrial plant's status first of all prevents serious damage that would require a long interruption of the production process. Secondly, it permits optimal scheduling of maintenance interventions by limiting them to urgent situations; at the same time, such interventions typically follow a fixed prudential schedule according to which components are substituted well before the end of their expected lifetime. This paper describes a case study on monitoring the status of the batteries of Laser-guided Vehicles (LGVs), on which we worked as our contribution to project SUPER (Supercomputing Unified Platform, Emilia-Romagna), aimed at establishing and demonstrating a regional High-Performance Computing platform that is going to represent the main Italian supercomputing environment in terms of both computing power and data volume.
    Comment: This paper reports on research carried out as a collaboration between the Department of Engineering and Architecture of the University of Parma and Elettric80 spa within project SUPER (Supercomputing Unified Platform, Emilia-Romagna).
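
    The abstract does not detail the detection method, so the following is only a generic time-series baseline for this kind of battery monitoring, not the method of the paper: flag a sample as anomalous when it deviates from the recent rolling statistics by more than a fixed number of standard deviations. The window size and threshold are arbitrary assumptions.

```python
import numpy as np

def rolling_zscore_anomalies(series, window=100, threshold=4.0):
    """Flags series[t] as anomalous when it lies more than `threshold` rolling
    standard deviations away from the mean of the previous `window` samples."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        past = series[t - window:t]
        mu, sigma = past.mean(), past.std()
        flags[t] = sigma > 0 and abs(series[t] - mu) > threshold * sigma
    return flags
```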

    Fine-Grained Agent-Based Modeling to Predict Covid-19 Spreading and Effect of Policies in Large-Scale Scenarios

    Modeling and forecasting the spread of COVID-19 remains an open problem for several reasons. One of these is the difficulty of modeling a complex system at a high-resolution (fine-grained) level, at which the spread can be simulated by taking into account individual features such as the social structure, the effects of government policies, age sensitivity to COVID-19, mask-wearing habits, and the geographical distribution of susceptible people. Agent-based modeling usually needs to find an optimal trade-off between the resolution of the simulation and the population size. Indeed, modeling single individuals usually leads to simulations of smaller populations or to the use of meta-populations. In this article, we propose a solution to efficiently model the COVID-19 spread in Lombardy, the most populated Italian region with about ten million people. In particular, the model described in this paper is, to the best of our knowledge, the first attempt in the literature to model a large population at the single-individual level. To achieve this goal, we propose a framework that implements: i. a scale-free model of the social contacts combining a sociability rate, demographic information, and geographical assumptions; ii. a multi-agent system relying on the actor model and High-Performance Computing technology to efficiently implement ten million concurrent agents. We simulated the epidemic scenario from January to April 2020 and from August to December 2020, modeling the government's lockdown policies and people's mask-wearing habits. The social modeling approach we propose could be rapidly adapted for modeling future epidemics at their early stage, in scenarios where little prior knowledge is available.
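
    Point i. above, the scale-free contact model driven by a sociability rate, can be sketched as follows; the Pareto exponent, the number of daily contacts, and the omission of the geographic and demographic terms are simplifying assumptions, and the actor-based HPC layer of point ii. is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_daily_contacts(sociability, contacts_per_agent=5):
    """Each agent draws its daily contacts with probability proportional to the
    other agents' sociability rates, so a few highly sociable agents become hubs
    (self-contacts are ignored for brevity)."""
    p = sociability / sociability.sum()
    n = len(sociability)
    return {i: rng.choice(n, size=contacts_per_agent, replace=False, p=p)
            for i in range(n)}

# A heavy-tailed (Pareto) sociability rate yields the scale-free contact structure.
sociability = rng.pareto(2.5, size=10_000) + 1.0
contacts = sample_daily_contacts(sociability)
```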

    The Silent Epidemic of Diabetic Ketoacidosis at Diagnosis of Type 1 Diabetes in Children and Adolescents in Italy During the COVID-19 Pandemic in 2020

    To compare the frequency of diabetic ketoacidosis (DKA) at diagnosis of type 1 diabetes in Italy during the COVID-19 pandemic in 2020 with the frequency of DKA during 2017-2019.